

Horizon-Independent Minimax Linear Regression

Malek, Alan, Bartlett, Peter L.

Neural Information Processing Systems

We consider online linear regression: at each round, an adversary reveals a covariate vector, the learner predicts a real value, the adversary reveals a label, and the learner suffers the squared prediction error. The aim is to minimize the difference between the cumulative loss and that of the linear predictor that is best in hindsight. Previous work demonstrated that the minimax optimal strategy is easy to compute recursively from the end of the game; this requires the entire sequence of covariate vectors in advance. We show that, once provided with a measure of the scale of the problem, we can invert the recursion and play the minimax strategy without knowing the future covariates. Further, we show that this forward recursion remains optimal even against adaptively chosen labels and covariates, provided that the adversary adheres to a set of constraints that prevent misrepresentation of the scale of the problem. This strategy is horizon-independent in that the regret and minimax strategies depend on the size of the constraint set and not on the time-horizon, and hence it incurs no more regret than the optimal strategy that knows in advance the number of rounds of the game. We also provide an interpretation of the minimax algorithm as a follow-the-regularized-leader strategy with a data-dependent regularizer and obtain an explicit expression for the minimax regret.
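To make the protocol concrete, here is a minimal sketch in Python (assuming numpy) of the game described above: covariates revealed one at a time, a real-valued prediction, a revealed label, squared error, and regret measured against the best linear predictor in hindsight. The learner used here is an ordinary ridge-regularized follow-the-leader placeholder, not the minimax strategy analyzed in the paper; the function name `play_protocol` and the parameter `lam` are illustrative choices.

```python
# Sketch of the online linear regression protocol with a placeholder learner.
# The learner is ridge-regularized follow-the-leader, NOT the paper's minimax strategy.
import numpy as np

def play_protocol(X, y, lam=1.0):
    """Run the protocol on a fixed sequence (X, y) and return the regret.

    X   : (T, d) array of covariate vectors, revealed one per round.
    y   : (T,) array of labels, revealed after each prediction.
    lam : ridge parameter of the placeholder learner (an assumption, not from the paper).
    """
    T, d = X.shape
    cov = lam * np.eye(d)   # regularized running sum of x_t x_t^T
    s = np.zeros(d)         # running sum of x_t * y_t
    learner_loss = 0.0
    for t in range(T):
        x_t = X[t]
        # Learner predicts a real value using past data only.
        y_hat = x_t @ np.linalg.solve(cov, s)
        # Adversary reveals the label; learner suffers the squared prediction error.
        learner_loss += (y_hat - y[t]) ** 2
        # Update sufficient statistics.
        cov += np.outer(x_t, x_t)
        s += x_t * y[t]
    # Cumulative loss of the linear predictor that is best in hindsight (least squares).
    theta_star, *_ = np.linalg.lstsq(X, y, rcond=None)
    best_loss = np.sum((X @ theta_star - y) ** 2)
    return learner_loss - best_loss

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200)
    print("regret of the placeholder learner:", play_protocol(X, y))
```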


Reviews: Horizon-Independent Minimax Linear Regression

Neural Information Processing Systems

The problem of online linear regression is considered from an individual-sequence perspective, where the aim is to control the square-loss predictive regret with respect to the best linear predictor $\theta^\top x_t$ simultaneously for every sequence of covariate vectors $x_t \in \mathbb{R}^d$ and outcomes $y_t \in \mathbb{R}$ in some constraint set. This is naturally formulated as a sequential game between the forecaster and an adversarial environment. In previous work [1], this problem was addressed in the "fixed-design" case, where the horizon $T$ and the sequence of covariate vectors $x_1^T$ are known in advance. The exact minimax strategy (MMS) was introduced and shown to be minimax optimal under natural constraint sets on the label sequence (such as ellipse-constrained labels). The MMS strategy is a form of least squares in which the inverse cumulative covariance matrix $\Pi_t^{-1}$ is replaced by a shrunk version $P_t$ that takes future instances into account.
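To illustrate the structure of the fixed-design MMS described above, the Python sketch below builds the $P_t$ matrices by a backward pass over the full covariate sequence and then predicts $\hat y_t = x_t^\top P_t s_{t-1}$ with $s_{t-1} = \sum_{q<t} x_q y_q$. The specific update used here, $P_{t-1} = P_t + P_t x_t x_t^\top P_t$ initialized at $P_T = \Pi_T^{-1}$, is our recollection of the recursion in [1] and should be checked against that paper; treat this as an illustrative sketch, not a faithful reimplementation.

```python
# Illustrative sketch of the fixed-design MMS structure: least-squares-style
# predictions in which the inverse cumulative covariance is replaced by
# matrices P_t computed backward from the full covariate sequence.
# The exact recursion is in [1]; the update below is an assumption.
import numpy as np

def mms_P_matrices(X):
    """Backward pass: from the full covariate sequence X (T, d), build P_1, ..., P_T."""
    T, d = X.shape
    Pi_T = X.T @ X                           # cumulative covariance at the horizon
    P = [None] * T
    P[T - 1] = np.linalg.pinv(Pi_T)          # end of the game: plain least-squares matrix
    for t in range(T - 1, 0, -1):            # fill P[t-1] from P[t] and the covariate x_t
        Px = P[t] @ X[t]
        P[t - 1] = P[t] + np.outer(Px, Px)   # assumed update; P_t ends up shrunk vs. Pi_t^{-1}
    return P

def mms_predictions(X, y):
    """Forward pass: predict y_hat_t = x_t^T P_t s_{t-1}, then observe y_t."""
    T, d = X.shape
    P = mms_P_matrices(X)
    s = np.zeros(d)                          # running sum of x_q * y_q over past rounds
    preds = np.empty(T)
    for t in range(T):
        preds[t] = X[t] @ (P[t] @ s)
        s += X[t] * y[t]
    return preds
```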

